Sign-methods for training with imprecise error function and gradient values

Authors

  • George D. Magoulas
  • Vassilis P. Plagianakos
  • Michael N. Vrahatis
Abstract

Training algorithms suitable to work under imprecise conditions are proposed. They require only the algebraic sign of the error function or its gradient to be correct, and, depending on the way they update the weights, they are analyzed as composite nonlinear Successive Overrelaxation (SOR) methods or composite nonlinear Jacobi methods applied to the gradient of the error function. The local convergence behavior of the proposed algorithms is also studied. The proposed approach seems practically useful when training is affected by technology imperfections, limited precision in operations and data, hardware component variations, and environmental changes that cause unpredictable deviations of parameter values from the designed configuration. In such cases it may be difficult or impossible to obtain very precise values for the error function and the gradient of the error during training.
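
For illustration, here is a minimal sketch of a sign-only weight update in the two flavors the abstract distinguishes. This is not the authors' exact scheme; the names `grad_fn`, `step`, and `mode` are assumptions introduced here.

```python
import numpy as np

def sign_step(weights, grad_fn, step=0.01, mode="jacobi"):
    """One sign-based weight update: only the algebraic signs of the
    partial derivatives are used, so the iteration tolerates imprecise
    gradient magnitudes.

    mode="jacobi": all weights move simultaneously, using signs taken
    at the current point (composite nonlinear Jacobi style).
    mode="sor":    weights move one at a time, each sign taken at the
    partially updated point (composite nonlinear SOR style).
    """
    w = np.asarray(weights, dtype=float).copy()
    if mode == "jacobi":
        w -= step * np.sign(grad_fn(w))
    else:
        for i in range(w.size):
            # re-evaluate the gradient sign at the partially updated point
            w[i] -= step * np.sign(grad_fn(w)[i])
    return w
```

A Jacobi-style sweep takes all signs from the same point, while the SOR-style sweep re-evaluates each sign at the partially updated point, which is what gives it the sequential character of nonlinear SOR.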


Similar Articles

Methods of Optimization in Imprecise Data Envelopment Analysis

  In this paper imprecise target models are proposed to investigate the relation between imprecise data envelopment analysis (IDEA) and minimax reference point formulations. Through these models, the decision makers' preferences are involved in interactive trade-off analysis procedures in multiple-objective linear programming with imprecise data. In addition, the gradient projection type...

A Training Method for Discrete

In this contribution a new training method is proposed for neural networks that are based on neurons whose output can be in a particular state. This method minimises the well-known least-squares criterion using information concerning only the signs of the error function and inaccurate gradient values. The algorithm is based on a modified one-dimensional bisection method and it treats supervis...
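
The sign-only idea behind such a bisection scheme can be sketched as follows. This is the textbook bisection on a sign change, not the modified variant the article refers to, and `f`, `a`, `b` are generic placeholders.

```python
import math

def bisect_sign(f, a, b, tol=1e-6, max_iter=200):
    """Root bracketing that consults only the sign of f at each point,
    so inaccurate function magnitudes do not affect the iterates.
    Assumes sign(f(a)) != sign(f(b))."""
    sa = math.copysign(1.0, f(a))
    for _ in range(max_iter):
        if b - a <= tol:
            break
        m = 0.5 * (a + b)
        if math.copysign(1.0, f(m)) == sa:
            a = m  # sign unchanged at m: the root lies in [m, b]
        else:
            b = m  # sign flipped at m: the root lies in [a, m]
    return 0.5 * (a + b)
```

Because only the sign of each evaluation is consulted, the bracket [a, b] keeps halving correctly even when the computed function values are inaccurate, as long as their signs are right.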

The Sarprop Algorithm: a Simulated Annealing Enhancement to Resilient Back Propagation

Back Propagation and its variations are widely used as methods for training artificial neural networks. One such variation, Resilient Back Propagation (RPROP), has proven to be one of the best in terms of speed of convergence. Our SARPROP enhancement, based on Simulated Annealing, is described in this paper and is shown to increase the rate of convergence for some problems. The extension involv...
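
For reference, the RPROP core update can be sketched as below. This is the simplified variant without weight backtracking (often called Rprop-); it omits the simulated-annealing noise term that SARPROP adds, and the hyperparameter values are the conventional defaults rather than anything taken from this article.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, delta,
               eta_plus=1.2, eta_minus=0.5, delta_max=50.0, delta_min=1e-6):
    """One RPROP iteration: each weight keeps its own step size delta,
    grown while successive gradient signs agree and shrunk when they
    flip; the update direction depends only on the sign of the gradient."""
    prod = grad * prev_grad
    # signs agree: grow the step size (capped at delta_max)
    delta = np.where(prod > 0, np.minimum(delta * eta_plus, delta_max), delta)
    # signs flipped: shrink the step size (floored at delta_min)
    delta = np.where(prod < 0, np.maximum(delta * eta_minus, delta_min), delta)
    return w - np.sign(grad) * delta, delta
```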

Geoid Determination Based on Log Sigmoid Function of Artificial Neural Networks (A Case Study: Iran)

A Back Propagation Artificial Neural Network (BPANN) is a well-known learning algorithm predicated on a gradient descent method that minimizes the squared error between the network output and the target output values. In this study, 261 GPS/Leveling and 8869 gravity intensity values of Iran were selected, then the geoid with three methods, “ellipsoidal Stokes integral”, “BPANN”, and “collocation”...

A conjugate gradient based method for Decision Neural Network training

Decision Neural Network is a new approach for solving multi-objective decision-making problems based on artificial neural networks. By using imprecise evaluation data, network training is improved and the number of required training data sets is reduced. The available training method is based on the gradient descent method (BP). One of its limitations is related to its convergence speed. Therefore,...
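
A generic nonlinear conjugate gradient step (Fletcher-Reeves flavor) is sketched below to show where the speed-up over plain gradient descent comes from. This is not the specific method of the article, and the fixed step length stands in for the line search a real implementation would use.

```python
import numpy as np

def cg_train(w, grad_fn, lr=0.01, iters=100, eps=1e-12):
    """Nonlinear conjugate gradient training loop (Fletcher-Reeves).
    Each search direction mixes the new negative gradient with the
    previous direction, reusing descent information that plain
    gradient descent (BP) discards."""
    g = grad_fn(w)
    d = -g
    for _ in range(iters):
        w = w + lr * d                          # fixed step in lieu of a line search
        g_new = grad_fn(w)
        beta = (g_new @ g_new) / (g @ g + eps)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return w
```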

Publication date: 1999